Counting and Sampling Fall 2017 Lecture 19: Log Concavity in Optimization
Abstract
In the last lecture we discussed applications of log-concavity of real-rooted polynomials in designing a deterministic approximation algorithm for the permanent. We also introduced the class of H-stable polynomials. Recall that a polynomial p(z1, . . . , zn) is H-stable if p(z1, . . . , zn) ≠ 0 whenever ℑ(zi) > 0 for all i. In the last lecture we saw that any univariate H-stable polynomial (with real coefficients) is real rooted. We also showed that any real-rooted polynomial with nonnegative coefficients is log-concave. Therefore, any univariate H-stable polynomial with nonnegative coefficients is log-concave in its variable. It turns out that there is a bigger story: in fact, any H-stable polynomial p(z1, . . . , zn) with nonnegative coefficients is log-concave in z1, . . . , zn.

Lemma 19.1. Any H-stable polynomial p ∈ R+[z1, . . . , zn] with nonnegative coefficients is log-concave in its variables.
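As a concrete illustration of Lemma 19.1 (this numerical check is not part of the lecture notes), the elementary symmetric polynomial e_2(z) = z1 z2 + z1 z3 + z2 z3 is H-stable with nonnegative coefficients, so the lemma predicts that log e_2 is concave on the positive orthant. The sketch below verifies this by checking that a finite-difference Hessian of log p is negative semidefinite at random positive points:

```python
import numpy as np

# Example (not from the notes): e_2(z) = z1 z2 + z1 z3 + z2 z3 is an
# H-stable polynomial with nonnegative coefficients, so by Lemma 19.1
# log p should be concave on the positive orthant.

def p(z):
    z1, z2, z3 = z
    return z1 * z2 + z1 * z3 + z2 * z3

def hessian_log_p(z, h=1e-4):
    """Central-difference Hessian of log p at a point z > 0."""
    n = len(z)
    H = np.zeros((n, n))
    f = lambda w: np.log(p(w))
    for i in range(n):
        for j in range(n):
            zpp = z.copy(); zpp[i] += h; zpp[j] += h
            zpm = z.copy(); zpm[i] += h; zpm[j] -= h
            zmp = z.copy(); zmp[i] -= h; zmp[j] += h
            zmm = z.copy(); zmm[i] -= h; zmm[j] -= h
            H[i, j] = (f(zpp) - f(zpm) - f(zmp) + f(zmm)) / (4 * h * h)
    return H

rng = np.random.default_rng(0)
for _ in range(100):
    z = rng.uniform(0.5, 5.0, size=3)
    eigs = np.linalg.eigvalsh(hessian_log_p(z))
    # negative semidefinite up to finite-difference error
    assert eigs.max() < 1e-6, "log p should be concave on the positive orthant"
```

Of course this only spot-checks concavity at sampled points; the lemma guarantees it everywhere on the positive orthant.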
Similar resources
Counting and Sampling Fall 2017 Lecture 7: Advanced Coupling & Mixing Time via Eigenvalues
In this lecture we first discuss [HV05] to prove that the Glauber dynamics generates a random coloring of a graph G with maximum degree ∆ using q ≥ 1.764∆ colors in O(n log n) time. Our main motivation is to introduce additional, more technical coupling tools beyond the path coupling technique. We prove the following theorem. Theorem 7.1. Let α ≈ 1.763 . . . satisfy α = e^{1/α}. If G is triangle free ...
Counting and Sampling Fall 2017 Lecture 14: Barvinok's Method: A Deterministic Algorithm for Permanent
Recall that the theorem of Jerrum-Sinclair-Vigoda [JSV04] shows that as long as A ≥ 0 we can use the MCMC technique to give a (1+ε)-approximation to per(A). But, if the entries of A can be negative (or even complex numbers), we have no other tool besides this theorem to estimate per(A). To prove this theorem, we use an elegant machinery of Barvinok. A weaker version of this theorem first appeared in [...
Shape Constrained Density Estimation via Penalized Rényi Divergence
Abstract. Shape constraints play an increasingly prominent role in nonparametric function estimation. While considerable recent attention has been focused on log concavity as a regularizing device in nonparametric density estimation, weaker forms of concavity constraints encompassing larger classes of densities have received less attention but offer some additional flexibility. Heavier tail beh...
Log-concavity Results on Gaussian Process Methods for Supervised and Unsupervised Learning
Log-concavity is an important property in the context of optimization, Laplace approximation, and sampling; Bayesian methods based on Gaussian process priors have become quite popular recently for classification, regression, density estimation, and point process intensity estimation. Here we prove that the predictive densities corresponding to each of these applications are log-concave, given a...
1.2 Equivalence of Counting and Sampling
In this course we will discuss several classes of approaches for these problems. A priori, one can think of two general frameworks: (i) construct a probability distribution that is almost the same as π(·) and generate samples from it, and (ii) approximately compute the partition function, Z, and use it recursively to generate samples. We will see the equivalence of counting and sampling in ...
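The recursive use of the partition function in framework (ii) can be sketched on a toy example (my own illustration, not taken from the notes): to draw a uniform random independent set of an n-vertex path, count independent sets exactly with the Fibonacci-type recurrence and decide vertex by vertex whether it belongs to the set, with probability given by a ratio of counts.

```python
import random
from functools import lru_cache

# Illustration (not from the notes): counting-to-sampling on a path graph.

@lru_cache(None)
def count_is(n):
    """Number of independent sets of the path on n vertices.

    Either v1 is excluded (count_is(n-1)) or v1 is included and v2
    must be skipped (count_is(n-2)); this is a Fibonacci recurrence.
    """
    if n <= 0:
        return 1
    if n == 1:
        return 2
    return count_is(n - 1) + count_is(n - 2)

def sample_is(n, rng=None):
    """Uniform random independent set of the n-vertex path v1..vn."""
    rng = rng or random.Random()
    chosen, i = [], 1
    while i <= n:
        remaining = n - i  # vertices strictly after v_i
        # P(v_i in S) = (# sets containing v_i) / (# all sets on v_i..v_n)
        #             = count_is(remaining - 1) / count_is(remaining + 1)
        if rng.random() < count_is(remaining - 1) / count_is(remaining + 1):
            chosen.append(i)
            i += 2  # v_{i+1} is blocked by v_i
        else:
            i += 1
    return chosen
```

Each conditional probability is an exact ratio of partition functions of smaller subproblems, which is precisely the counting-to-sampling reduction; replacing the exact counts with (1±ε)-approximations yields approximately uniform samples.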